Video representation learning has been successful in video-text pre-training for zero-shot transfer, where each sentence is trained to be close to the paired video clips in a common feature space. For long videos, given a paragraph of description whose sentences describe different segments of the video, by matching all sentence-clip pairs, the paragraph and the full video are aligned implicitly. However, such a unit-level similarity measure may ignore the global temporal context over a long time span, which inevitably limits the generalization ability. In this paper, we propose a contrastive learning framework, TempCLR, to compare the full video and the paragraph explicitly. As the video/paragraph is formulated as a sequence of clips/sentences, under the constraint of their temporal order, we use dynamic time warping to compute the minimum cumulative cost over sentence-clip pairs as the sequence-level distance. To explore temporal dynamics, we break the consistency of temporal order by shuffling video clips or sentences according to the temporal granularity. In this way, we obtain clip/sentence representations that perceive temporal information and thus facilitate sequence alignment. Beyond pre-training on videos and paragraphs, our approach also generalizes to matching between different video instances. We evaluate our approach on video retrieval, action step localization, and few-shot action recognition, and achieve consistent performance gains on all three tasks. Detailed ablation studies are provided to justify the design of our approach.
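The sequence-level distance described above (minimum cumulative cost over sentence-clip pairs under the temporal-order constraint) is classic dynamic time warping. A minimal sketch in plain Python, using an illustrative cost matrix rather than real embedding distances:

```python
def dtw_distance(cost):
    """Minimum cumulative cost of aligning two sequences while preserving temporal order."""
    n, m = len(cost), len(cost[0])
    acc = [[float("inf")] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                acc[i][j] = cost[i][j]
                continue
            best_prev = min(
                acc[i - 1][j] if i > 0 else float("inf"),                # advance clip only
                acc[i][j - 1] if j > 0 else float("inf"),                # advance sentence only
                acc[i - 1][j - 1] if i > 0 and j > 0 else float("inf"),  # advance both
            )
            acc[i][j] = cost[i][j] + best_prev
    return acc[n - 1][m - 1]

# 3 sentences x 4 clips; lower cost = more similar embeddings (illustrative values).
cost = [
    [0.1, 0.9, 0.8, 0.9],
    [0.9, 0.2, 0.3, 0.9],
    [0.9, 0.9, 0.8, 0.1],
]
print(dtw_distance(cost))  # ≈ 0.7: the low-cost diagonal path dominates
```

Shuffling the rows of `cost` (i.e., breaking temporal order) raises this distance, which is what makes it a useful sequence-level contrastive signal.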
Given an image and a reference caption, the image caption editing task aims to correct misalignment errors and generate a refined caption. However, all existing caption editing works are implicit models, i.e., they directly generate the refined caption without explicit connections to the reference caption. In this paper, we introduce a new task: Explicit Caption Editing (ECE). An ECE model explicitly generates a sequence of edit operations, and this edit operation sequence can translate the reference caption into the refined one. Compared to implicit editing, ECE has several advantages: 1) Explainable: it can trace the whole editing path. 2) Editing efficient: it only needs to modify a few words. 3) Human-like: it resembles the way humans perform caption editing and tries to keep the original sentence structure. To solve this new task, we propose the first ECE model: TIGER. TIGER is a non-autoregressive Transformer-based model consisting of three modules: Tagger_del, Tagger_add, and Inserter. Specifically, Tagger_del decides whether each word should be preserved, Tagger_add decides where to add new words, and Inserter predicts the specific words to be added. To further facilitate ECE research, we propose two new ECE benchmarks by re-organizing two existing datasets, dubbed COCO-EE and Flickr30K-EE, respectively. Extensive ablations on both benchmarks demonstrate the effectiveness of TIGER.
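To make the edit-operation view concrete, here is a hypothetical sketch of applying a KEEP/DELETE/ADD operation sequence to a reference caption. The operation names mirror the tagger/inserter roles described above but are a simplification, not TIGER's actual interface:

```python
def apply_edits(reference, ops):
    """ops: list of ("KEEP",), ("DELETE",), or ("ADD", word), aligned with reference tokens."""
    tokens = reference.split()
    out, i = [], 0
    for op in ops:
        if op[0] == "ADD":           # insert a new word before the current token
            out.append(op[1])
        elif op[0] == "KEEP":
            out.append(tokens[i]); i += 1
        elif op[0] == "DELETE":
            i += 1                   # drop the current reference token
    out.extend(tokens[i:])           # keep any remaining tokens unchanged
    return " ".join(out)

ref = "a dog sitting on the grass"
ops = [("KEEP",), ("DELETE",), ("ADD", "cat"), ("KEEP",), ("KEEP",), ("KEEP",), ("KEEP",)]
print(apply_edits(ref, ops))  # → "a cat sitting on the grass"
```

Note how only two operations differ from KEEP, illustrating the "editing efficient" property: most of the original sentence structure survives the edit.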
Few-shot object detection (FSOD), which aims to detect novel objects with only a few training examples, has recently attracted great research interest from the community. Metric-learning-based methods have been demonstrated to be effective for this task using a two-branch siamese network, computing the similarity between image regions and few-shot examples for detection. However, in previous works, the interaction between the two branches is restricted to the detection head, leaving the remaining hundreds of layers for separate feature extraction. Inspired by recent work on vision transformers, we propose a novel Fully Cross-Transformer-based model (FCT) for FSOD by incorporating cross-transformers into both the feature backbone and the detection head. An asymmetric-batched cross-attention is proposed to aggregate the key information from the two branches with different batch sizes. Our model can improve the few-shot similarity learning between the two branches by introducing multi-level interactions. Comprehensive experiments on the PASCAL VOC and MSCOCO FSOD benchmarks demonstrate the effectiveness of our model.
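The branch interaction described above boils down to cross-attention between query-image features and few-shot support features. A generic scaled-dot-product sketch (textbook cross-attention, not FCT's asymmetric-batched variant) might look like:

```python
import numpy as np

def cross_attention(query_feats, support_feats):
    """Each query token attends to all support tokens (scaled dot-product attention)."""
    d = query_feats.shape[-1]
    logits = query_feats @ support_feats.T / np.sqrt(d)
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ support_feats                    # support-conditioned query features

# 5 query-region tokens attending to 3 support tokens, 16-dim features (toy shapes)
rng = np.random.default_rng(0)
enhanced = cross_attention(rng.normal(size=(5, 16)), rng.normal(size=(3, 16)))
print(enhanced.shape)  # (5, 16)
```

FCT's contribution is to apply this kind of interaction at multiple backbone stages rather than only once in the detection head.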
Few-shot object detection (FSOD) aims to detect never-seen objects using only a few examples. Recent advances have been achieved by learning how to match between query image regions and few-shot class examples, such that the learned model can generalize to few-shot novel classes. However, most current meta-learning-based methods perform pairwise matching between query image regions (usually proposals) and novel classes separately, therefore failing to take into account the multiple relationships among them. In this paper, we propose a novel FSOD model using heterogeneous graph convolutional networks. Through efficient message passing among all the proposal and class nodes, connected by three different types of edges, we can obtain context-aware proposal features and query-adaptive, multiclass-enhanced prototype representations for each class, which help promote the pairwise matching and improve the final FSOD accuracy. Extensive experimental results show that our proposed model, denoted QA-FewDet, outperforms current state-of-the-art approaches on the PASCAL VOC and MSCOCO FSOD benchmarks under different shots and evaluation metrics.
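The message passing over proposal and class nodes can be sketched schematically. The typed edges and plain mean aggregation below are illustrative stand-ins for the paper's learned per-edge-type transformations:

```python
import numpy as np

def message_pass(node_feats, edges):
    """One round of message passing.

    node_feats: dict name -> feature vector; edges: list of (src, dst, edge_type).
    Each node blends its own feature with the mean of its incoming messages.
    """
    incoming = {n: [] for n in node_feats}
    for src, dst, _etype in edges:
        incoming[dst].append(node_feats[src])
    return {n: (node_feats[n] + np.mean(incoming[n], axis=0)) / 2 if incoming[n]
            else node_feats[n]
            for n in node_feats}

feats = {"prop1": np.array([1.0, 0.0]), "prop2": np.array([0.0, 1.0]),
         "class_cat": np.array([1.0, 1.0])}
edges = [("class_cat", "prop1", "class-proposal"),   # class-to-proposal edge
         ("prop2", "prop1", "proposal-proposal")]    # proposal-to-proposal edge
updated = message_pass(feats, edges)
print(updated["prop1"])  # prop1 blended with messages from class_cat and prop2
```

After such a round, `prop1` carries context from both its candidate class and a related proposal, which is the "context-aware proposal features" idea in miniature.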
Few-shot object detection (FSOD) aims to detect objects using only a few examples. How to adapt state-of-the-art object detectors to the few-shot domain remains challenging. Object proposals are a key ingredient in modern object detectors. However, the quality of proposals generated for few-shot classes using existing methods is far worse than for many-shot classes, e.g., with boxes missed for few-shot classes due to misclassification or inaccurate spatial locations. To address the noisy proposal problem, we propose a novel meta-learning-based FSOD model by jointly optimizing few-shot proposal generation and fine-grained few-shot proposal classification. To improve proposal generation for few-shot classes, we propose to learn a lightweight metric-learning-based prototype matching network, instead of the conventional simple linear object/non-object classifier used, e.g., in the RPN. Our non-linear classifier with a feature fusion network improves discriminative prototype matching and proposal recall for few-shot classes. To improve fine-grained few-shot proposal classification, we propose a novel attentive feature alignment method to address the spatial misalignment between noisy proposals and few-shot classes, thus improving few-shot object detection performance. Meanwhile, we learn a separate Faster R-CNN detection head for many-shot base classes and show strong performance in maintaining base-class knowledge. Our model achieves state-of-the-art performance on multiple FSOD benchmarks over most shots and metrics.
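The prototype matching idea (replacing a linear object/non-object classifier with metric-based scoring against support prototypes) can be sketched as follows; all shapes and names are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def prototypes(support_feats_per_class):
    """Average the support features of each class into one prototype vector."""
    return {c: np.mean(feats, axis=0) for c, feats in support_feats_per_class.items()}

def match_scores(region_feat, protos):
    """Cosine similarity of a region feature to every class prototype."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return {c: cos(region_feat, p) for c, p in protos.items()}

rng = np.random.default_rng(1)
support = {"cat": rng.normal(size=(3, 32)), "dog": rng.normal(size=(3, 32))}
protos = prototypes(support)
scores = match_scores(protos["cat"], protos)  # a feature identical to the cat prototype
best = max(scores, key=scores.get)
print(best)  # "cat": the matching prototype scores highest (cosine ≈ 1)
```

A proposal network scoring regions this way needs no class-specific linear weights, which is why it transfers to novel few-shot classes better than a fixed object/non-object classifier.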
We study the problem of few-shot open-set recognition (FSOR), which learns a recognition system capable of both fast adaptation to new classes with limited labeled examples and rejection of unknown negative samples. Traditional large-scale open-set methods are ineffective for the FSOR problem due to data limitations. Current FSOR methods typically calibrate few-shot closed-set classifiers to be sensitive to negative samples so that they can be rejected via thresholding. However, threshold tuning is a challenging process, as different FSOR tasks may require different rejection powers. In this paper, we propose task-adaptive negative class envision for FSOR to integrate threshold tuning into the learning process. Specifically, we augment the few-shot closed-set classifier with additional negative prototypes generated from the few-shot examples. By incorporating few-shot class correlations in the negative generation process, we are able to learn dynamic rejection boundaries for FSOR tasks. In addition, we extend our method to generalized few-shot open-set recognition (GFSOR), which requires classification over both many-shot and few-shot classes as well as rejection of negative samples. Extensive experiments on public benchmarks validate our method on both problems.
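Treating rejection as an extra, learned negative class rather than a tuned threshold can be illustrated with a toy example. The fixed negative prototype below is a placeholder for the paper's task-adaptively generated ones:

```python
import numpy as np

def classify_with_rejection(x, class_protos, negative_proto):
    """Score against class prototypes plus a negative prototype; argmax decides."""
    protos = dict(class_protos, **{"<reject>": negative_proto})
    scores = {c: float(x @ p) for c, p in protos.items()}
    return max(scores, key=scores.get)

class_protos = {"bird": np.array([1.0, 0.0]), "fish": np.array([0.0, 1.0])}
negative_proto = np.array([-1.0, -1.0])   # placeholder "none of the above" direction

print(classify_with_rejection(np.array([0.9, 0.1]), class_protos, negative_proto))   # bird
print(classify_with_rejection(np.array([-0.8, -0.7]), class_protos, negative_proto)) # <reject>
```

Because the negative prototype competes in the same softmax/argmax as the real classes, the rejection boundary moves with the task instead of being a fixed global threshold.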
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, for the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
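Token-relation distillation, the most effective target identified above, can be sketched as matching the teacher's token-to-token affinity map (e.g., softmax(QKᵀ)) rather than raw features. The exact relation target and shapes below are assumptions, not TinyMIM's precise formulation:

```python
import numpy as np

def relation_map(q, k):
    """Row-wise softmax of token-token similarities: (n_tokens, n_tokens)."""
    logits = q @ k.T / np.sqrt(q.shape[1])
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def relation_distill_loss(teacher_rel, student_rel, eps=1e-8):
    """Row-averaged KL divergence between teacher and student relation maps."""
    return float(np.mean(np.sum(
        teacher_rel * np.log((teacher_rel + eps) / (student_rel + eps)), axis=1)))

# Toy Q/K for 4 tokens with 8-dim heads; real ones come from attention layers.
rng = np.random.default_rng(0)
t_q, t_k = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
s_q, s_k = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
t_rel = relation_map(t_q, t_k)
loss = relation_distill_loss(t_rel, relation_map(s_q, s_k))
print(loss >= 0.0)  # KL divergence is non-negative
```

Because relation maps are invariant to the feature dimensionality, this target sidesteps the teacher/student width mismatch that plagues direct feature distillation.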
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
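The mask-based dynamic class centers can be sketched as masked average pooling of support features, followed by channel-wise re-weighting of query features. Function names and the sigmoid gating below are simplifications, not the paper's actual module:

```python
import numpy as np

def masked_class_center(support_feat, support_mask):
    """support_feat: (H, W, C); support_mask: (H, W) binary. Average features inside the mask."""
    m = support_mask[..., None]
    return (support_feat * m).sum(axis=(0, 1)) / (m.sum() + 1e-8)

def reweight_query(query_feat, class_center):
    """Channel-wise re-weighting of query features by a sigmoid-squashed class center."""
    gate = 1.0 / (1.0 + np.exp(-class_center))
    return query_feat * gate

rng = np.random.default_rng(2)
support = rng.normal(size=(8, 8, 16))
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0   # object occupies the central region
center = masked_class_center(support, mask)
out = reweight_query(rng.normal(size=(8, 8, 16)), center)
print(out.shape)  # (8, 8, 16)
```

The mask restricts pooling to object pixels, so the center reflects the class rather than the support image's background; this is the feature-level half of the "reference twice" enhancement.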
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io.
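The parallel decoding that makes Muse faster than autoregressive models can be illustrated with a toy loop: predict all masked positions at once, commit the most confident predictions each step, and re-mask the rest. The stand-in `predict` function below replaces the actual Transformer, and the commit schedule is a simplification:

```python
import random

MASK = None

def predict(tokens):
    """Stand-in model: returns {position: (token, confidence)} for every masked slot."""
    return {i: (random.randrange(1024), random.random())
            for i, t in enumerate(tokens) if t is MASK}

def parallel_decode(n_tokens, n_steps=4, seed=0):
    random.seed(seed)
    tokens = [MASK] * n_tokens
    for step in range(n_steps):
        preds = predict(tokens)
        if not preds:
            break
        # commit an even share of the remaining masked slots, most confident first
        keep = max(1, len(preds) // (n_steps - step))
        for i, (tok, _conf) in sorted(preds.items(), key=lambda kv: -kv[1][1])[:keep]:
            tokens[i] = tok
    for i, (tok, _conf) in predict(tokens).items():  # commit any stragglers
        tokens[i] = tok
    return tokens

decoded = parallel_decode(16)
print(all(t is not MASK for t in decoded))  # every slot decoded in n_steps passes
```

A 16-token image here takes 4 model calls instead of 16, which is the essence of the efficiency argument versus token-by-token autoregression.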
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling the distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDEs) and derive discrete graph structures as the condition for the reverse generative process. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Notably, the proposed method still generates high-quality molecular graphs within a limited number of steps.
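The forward diffusion over continuous features can be illustrated with a generic Euler-Maruyama simulation of a VP-type SDE, as used in score-based generative models. This is a textbook sketch with an assumed linear noise schedule, not CDGS's exact graph-structured formulation:

```python
import math, random

def forward_diffuse(x, n_steps=1000, beta_min=0.1, beta_max=20.0, seed=0):
    """Euler-Maruyama steps of dx = -0.5 * beta(t) * x dt + sqrt(beta(t)) dW."""
    random.seed(seed)
    dt = 1.0 / n_steps
    for step in range(n_steps):
        t = step * dt
        beta = beta_min + t * (beta_max - beta_min)   # linear noise schedule
        x = [xi - 0.5 * beta * xi * dt + math.sqrt(beta * dt) * random.gauss(0, 1)
             for xi in x]
    return x

# Three scalar node features; by t = 1 they are driven toward a standard Gaussian.
noised = forward_diffuse([1.0, -2.0, 0.5])
print(noised)
```

The reverse-time process (learned by the noise prediction model) runs this SDE backwards; CDGS additionally conditions that reverse process on discrete graph structures derived along the way.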